Generating Out of Distribution Adversarial Attack Using Latent Space Poisoning

Authors

Abstract

Traditional adversarial attacks rely upon perturbations generated by gradients from the network, typically obtained through gradient-guided search, to produce an adversarial counterpart for the network. In this letter, we propose a novel framework to generate adversarial examples in which the actual image is not corrupted; rather, its latent space representation is utilized to tamper with the inherent structure of the image while keeping the perceptual quality intact, so that the generated samples act as legitimate data samples. As opposed to gradient-based attacks, latent space poisoning exploits the inclination of classifiers to model the independent and identical distribution of the training dataset, and tricks them by producing out-of-distribution samples. We train a disentangled variational autoencoder (β-VAE) to model the data in latent space, then add noise perturbations using a class-conditioned function under the constraint that the resulting sample is misclassified as the target label. Our empirical results on MNIST, SVHN, and CelebA validate that the generated adversarial examples can easily fool robust ℓ0, ℓ2, and ℓ∞ norm classifiers designed with provably robust defense mechanisms. The source code is made publicly available at https://github.com/Ujjwal-9/latent-space-poisoning.
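The core idea in the abstract, perturbing the latent code rather than the pixels until the decoded sample is misclassified, can be sketched in a few lines. The following is a minimal toy illustration, not the paper's method: it uses hypothetical linear maps in place of a trained β-VAE encoder/decoder and classifier, and a simple random search in place of the paper's class-conditioned noise function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): the paper trains a beta-VAE encoder/decoder
# and attacks a deep classifier; here all three are random linear maps.
D, Z = 8, 3                       # "image" dimension, latent dimension
W_enc = rng.normal(size=(Z, D))   # encoder: x -> z
W_dec = np.linalg.pinv(W_enc)     # decoder: z -> x (pseudo-inverse of encoder)
W_clf = rng.normal(size=(2, D))   # 2-class linear classifier (logits)

def encode(x):  return W_enc @ x
def decode(z):  return W_dec @ z
def predict(x): return int(np.argmax(W_clf @ x))

def latent_poison(x, target, step=0.5, tries=1000):
    """Perturb the latent code z (not the pixels x) with random noise,
    growing the noise scale, until the decoded sample is assigned the
    target label. Returns the decoded adversarial sample, or None."""
    z = encode(x)
    for t in range(tries):
        sigma = step * (1.0 + 0.01 * t)          # widen search over time
        z_adv = z + sigma * rng.normal(size=Z)   # poison in latent space
        x_adv = decode(z_adv)
        if predict(x_adv) == target:
            return x_adv
    return None

x = rng.normal(size=D)          # a clean "image"
orig = predict(x)
x_adv = latent_poison(x, target=1 - orig)
```

Because the perturbation lives in the decoder's latent space, every candidate `x_adv` is a decodable sample rather than a pixel-space corruption; in the paper this is what lets the attack produce out-of-distribution yet legitimate-looking inputs.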


Similar resources

LatentPoison - Adversarial Attacks On The Latent Space

Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustne...


Generating Adversarial Examples with Adversarial Networks

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high perceptual quality and more efficiently requires mor...


Generating an Indoor space routing graph using semantic-geometric method

The development of indoor Location-Based Services faces various challenges that one of which is the method of generating indoor routing graph. Due to the weaknesses of purely geometric methods for generating indoor routing graphs, a semantic-geometric method is proposed to cover the existing gaps in combining the semantic and geometric methods in this study. The proposed method uses the CityGML...


Generating Natural Adversarial Examples

Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the adversarial scenarios where they fail. However, th...


Generating a Random Sample from the Gamma Distribution Using the Generalized Exponential Distribution

In this paper, we discuss generating a random sample from the gamma distribution using the generalized exponential distribution.



Journal

Journal title: IEEE Signal Processing Letters

Year: 2021

ISSN: 1558-2361, 1070-9908

DOI: https://doi.org/10.1109/lsp.2021.3061327